HiPreNets: High-Precision Neural Networks through Progressive Training
Mulle, Ethan, Kang, Wei, Gong, Qi
Deep neural networks are powerful tools for solving nonlinear problems in science and engineering, but training highly accurate models becomes challenging as problem complexity increases. Non-convex optimization and numerous hyperparameters to tune make performance improvement difficult, and traditional approaches often prioritize minimizing mean squared error (MSE) while overlooking $L^{\infty}$ error, which is the critical focus in many applications. To address these challenges, we present a progressive framework for training and tuning high-precision neural networks (HiPreNets). Our approach refines a previously explored staged training technique for neural networks that improves an existing fully connected neural network by sequentially learning its prediction residuals using additional networks, leading to improved overall accuracy. We discuss how to take advantage of the structure of the residuals to guide the choice of loss function, number of parameters to use, and ways to introduce adaptive data sampling techniques. We validate our framework's effectiveness through several benchmark problems.
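As a rough illustration of the staged residual idea sketched in the abstract, the following minimal PyTorch example fits a base network and then trains additional networks on the remaining prediction residuals. All architecture and training choices (widths, optimizer, plain MSE loss, helper names like `make_net` and `fit`) are placeholders, not the authors' HiPreNets implementation.

```python
# Minimal sketch of staged residual training (illustrative only; not the
# authors' HiPreNets code). A base network is fit first, then each additional
# network is trained on the residuals left by the networks before it.
import numpy as np
import torch
import torch.nn as nn

def make_net(width=64):
    # Small fully connected regressor; architecture choices are placeholders.
    return nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                         nn.Linear(width, width), nn.Tanh(),
                         nn.Linear(width, 1))

def fit(net, x, y, epochs=2000, lr=1e-3):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = torch.mean((net(x) - y) ** 2)   # plain MSE for the sketch
        loss.backward()
        opt.step()
    return net

x = torch.linspace(-1, 1, 512).reshape(-1, 1)
y = torch.sin(4 * np.pi * x)                   # toy target

nets, residual = [], y.clone()
for stage in range(3):                         # base net plus two residual nets
    net = fit(make_net(), x, residual)
    nets.append(net)
    with torch.no_grad():
        residual = residual - net(x)           # next stage learns what is left

with torch.no_grad():
    pred = sum(net(x) for net in nets)         # final prediction is the sum
    print("max abs error:", torch.max(torch.abs(pred - y)).item())
```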
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > California > Santa Cruz County > Santa Cruz (0.04)
- Oceania > Australia > New South Wales > Sydney (0.04)
- North America > United States > California > Monterey County > Monterey (0.04)
- Government > Regional Government > North America Government > United States Government (0.67)
- Government > Military > Navy (0.46)
A Novel Stochastic LSTM Model Inspired by Quantum Machine Learning
Works in quantum machine learning (QML) over the past few years indicate that QML algorithms can function just as well as their classical counterparts, and even outperform them in some cases. Among the corpus of recent work, many current QML models take advantage of variational quantum algorithm (VQA) circuits, given that their scale is typically small enough to be compatible with NISQ devices and the method of automatic differentiation for optimizing circuit parameters is familiar to machine learning (ML). While the results bear interesting promise for an era when quantum machines are more readily accessible, if one can achieve similar results through non-quantum methods then there may be a more near-term advantage available to practitioners. To this end, this work investigates stochastic methods inspired by a variational quantum version of the long short-term memory (LSTM) model, in an attempt to approach the reported successes in performance and rapid convergence. By analyzing the performance of classical, stochastic, and quantum methods, this work aims to elucidate whether it is possible to achieve some of QML's major reported benefits on classical machines by incorporating aspects of its stochasticity.
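One plausible way to inject the kind of stochasticity the abstract alludes to is to perturb the gate pre-activations of an LSTM cell with Gaussian noise during training. The sketch below shows this, but the specific stochastic mechanism studied in the paper is not described in the abstract, so the noise model, the `StochasticLSTMCell` name, and all sizes here are assumptions.

```python
# Illustrative sketch only: a standard LSTM cell whose gate pre-activations
# receive Gaussian noise during training. This is an assumed stand-in for the
# paper's stochastic mechanism, which the abstract does not specify.
import torch
import torch.nn as nn

class StochasticLSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        # One linear map producing the four gate pre-activations.
        self.gates = nn.Linear(input_size + hidden_size, 4 * hidden_size)

    def forward(self, x, state):
        h, c = state
        z = self.gates(torch.cat([x, h], dim=-1))
        if self.training and self.noise_std > 0:
            z = z + self.noise_std * torch.randn_like(z)   # stochastic gates
        i, f, g, o = z.chunk(4, dim=-1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

cell = StochasticLSTMCell(input_size=3, hidden_size=8)
h = c = torch.zeros(1, 8)
for t in range(5):                       # unroll over a toy sequence
    out, (h, c) = cell(torch.randn(1, 3), (h, c))
print(out.shape)                         # torch.Size([1, 8])
```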
- North America > United States > South Carolina > Richland County > Columbia (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > New York > New York County > New York City (0.04)
Aboveground carbon biomass estimate with Physics-informed deep network
Nathaniel, Juan, Klein, Levente J., Watson, Campbell D., Nyirjesy, Gabrielle, Albrecht, Conrad M.
The global carbon cycle is a key process for understanding how our climate is changing. However, monitoring its dynamics is difficult because high-resolution, robust measurements of key state parameters, including the aboveground carbon biomass (AGB), are required. Here, we use a deep neural network to generate a wall-to-wall map of AGB within the Continental USA (CONUS) at 30-meter spatial resolution for the year 2021. We combine radar and optical hyperspectral imagery with a physical climate parameter, SIF-based GPP. Validation results show that a masked variation of UNet has the lowest validation RMSE of 37.93 $\pm$ 1.36 Mg C/ha, compared to 52.30 $\pm$ 0.03 Mg C/ha for a random forest algorithm. Furthermore, models that learn from SIF-based GPP in addition to radar and optical imagery reduce the validation RMSE by almost 10% and the standard deviation by 40%. Finally, we apply our model to measure losses in AGB from the recent 2021 Caldor wildfire in California and validate our analysis with a Sentinel-based burn index.
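A minimal sketch of the input fusion and validation metric described above might look as follows; the arrays, band counts, and `masked_rmse` helper are illustrative stand-ins, not the authors' data pipeline or masked UNet.

```python
# Minimal sketch of channel stacking and a masked validation RMSE in Mg C/ha
# (toy data only; not the authors' pipeline or model).
import numpy as np

# Toy stand-ins: radar backscatter, optical/hyperspectral bands, and SIF-based
# GPP, all assumed resampled to a common 30 m grid (here a 64x64 tile).
radar   = np.random.rand(2, 64, 64)     # e.g. two SAR polarizations
optical = np.random.rand(6, 64, 64)     # e.g. six reflectance bands
gpp     = np.random.rand(1, 64, 64)     # physical climate covariate

x = np.concatenate([radar, optical, gpp], axis=0)    # model input, (9, 64, 64)
print("model input shape:", x.shape)

def masked_rmse(pred, target, mask):
    # RMSE computed only where the reference AGB is valid.
    diff = (pred - target)[mask]
    return float(np.sqrt(np.mean(diff ** 2)))

pred_agb   = 100 * np.random.rand(64, 64)             # placeholder prediction
target_agb = 100 * np.random.rand(64, 64)             # placeholder reference AGB
valid      = target_agb > 0                           # mask out no-data pixels
print("validation RMSE (Mg C/ha):", masked_rmse(pred_agb, target_agb, valid))
```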
- North America > United States > California (0.25)
- South America > Colombia (0.04)
- North America > United States > Tennessee > Anderson County > Oak Ridge (0.04)
Neural Network Training Using $\ell_1$-Regularization and Bi-fidelity Data
De, Subhayan, Doostan, Alireza
With the capability of accurately representing a functional relationship between the inputs of a physical system's model and output quantities of interest, neural networks have become popular for surrogate modeling in scientific applications. However, as these networks are over-parameterized, their training often requires a large amount of data. To prevent overfitting and improve generalization error, regularization based on, e.g., $\ell_1$- and $\ell_2$-norms of the parameters is applied. Similarly, multiple connections of the network may be pruned to increase sparsity in the network parameters. In this paper, we explore the effects of sparsity promoting $\ell_1$-regularization on training neural networks when only a small training dataset from a high-fidelity model is available. As opposed to standard $\ell_1$-regularization that is known to be inadequate, we consider two variants of $\ell_1$-regularization informed by the parameters of an identical network trained using data from lower-fidelity models of the problem at hand. These bi-fidelity strategies are generalizations of transfer learning of neural networks that uses the parameters learned from a large low-fidelity dataset to efficiently train networks for a small high-fidelity dataset. We also compare the bi-fidelity strategies with two $\ell_1$-regularization methods that only use the high-fidelity dataset. Three numerical examples for propagating uncertainty through physical systems are used to show that the proposed bi-fidelity $\ell_1$-regularization strategies produce errors that are one order of magnitude smaller than those of networks trained only using datasets from the high-fidelity models.
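As one plausible reading of a bi-fidelity-informed penalty, the sketch below applies an $\ell_1$ penalty to the deviation of a network's parameters from those of an identical network assumed to be pre-trained on low-fidelity data. The variants actually studied in the paper may differ, and the toy data, sizes, and `make_net` helper are illustrative.

```python
# Sketch of one plausible bi-fidelity-informed l1 penalty: penalize sparse
# deviation of the high-fidelity network's parameters from those of an
# identical network trained on low-fidelity data. Illustrative assumption,
# not necessarily the variants used in the paper.
import torch
import torch.nn as nn

def make_net():
    return nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

lf_net = make_net()         # assumed already trained on abundant low-fidelity data
hf_net = make_net()
hf_net.load_state_dict(lf_net.state_dict())    # warm start from low fidelity

x_hf = torch.rand(20, 2)                       # small high-fidelity dataset
y_hf = torch.sin(x_hf.sum(dim=1, keepdim=True))

lam = 1e-3
opt = torch.optim.Adam(hf_net.parameters(), lr=1e-3)
lf_params = [p.detach() for p in lf_net.parameters()]

for _ in range(500):
    opt.zero_grad()
    mse = torch.mean((hf_net(x_hf) - y_hf) ** 2)
    l1 = sum(torch.sum(torch.abs(p - q))                       # l1 norm of the
             for p, q in zip(hf_net.parameters(), lf_params))  # parameter deviation
    (mse + lam * l1).backward()
    opt.step()
```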
- North America > United States > Colorado > Boulder County > Boulder (0.14)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > District of Columbia > Washington (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Research Report (1.00)
- Instructional Material (0.67)
- Energy (0.67)
- Government > Regional Government > North America Government > United States Government (0.46)
Uncertainty Quantification of Locally Nonlinear Dynamical Systems using Neural Networks
Models are often given in terms of differential equations to represent physical systems. In the presence of uncertainty, accurate prediction of the behavior of these systems using the models requires understanding the effect of uncertainty on the response. In uncertainty quantification, statistics such as the mean and variance of the response of these physical systems are sought. To estimate these statistics, sampling-based methods like Monte Carlo often require many evaluations of the models' governing equations for multiple realizations of the uncertainty. However, for large complex engineering systems, these methods become computationally burdensome. In structural engineering, an otherwise linear structure often contains spatially local nonlinearities with uncertainty present in them. Combining a standard nonlinear solver with sampling-based methods for uncertainty quantification then incurs significant computational cost in estimating the statistics of the response. To ease this computational burden of uncertainty quantification of large-scale locally nonlinear dynamical systems, a method is proposed herein that decomposes the response into two parts -- the response of a nominal linear system and a corrective term. The corrective term is the response to a pseudoforce that contains the nonlinearity and uncertainty information. In this paper, a neural network, a tool that has recently become popular for universal function approximation in the scientific machine learning community owing to advances in computational capability and the availability of open-source packages like PyTorch and TensorFlow, is used to estimate the pseudoforce. Since only the nonlinear and uncertain pseudoforce is modeled using the neural network, the same network can be used to predict a different response of the system; hence, no new network needs to be trained if the statistics of a different response are sought.
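The decomposition can be illustrated on a single-degree-of-freedom oscillator with a local cubic (Duffing-type) nonlinearity: the full response equals the nominal linear response plus a corrective response driven by the pseudoforce. In the sketch below the pseudoforce is evaluated analytically; in the proposed method a neural network would predict it across realizations of the uncertainty. Parameter values and the explicit Euler stepping are arbitrary choices for illustration.

```python
# Sketch of the response decomposition for a single-DOF oscillator with a
# local cubic nonlinearity. The pseudoforce is computed analytically here;
# in the paper a neural network would be trained to predict it.
import numpy as np

m, c, k = 1.0, 0.1, 4.0          # nominal linear mass, damping, stiffness
alpha = 0.8                      # uncertain local nonlinear coefficient
dt, n = 0.01, 2000
f = np.sin(2 * np.pi * 0.3 * dt * np.arange(n))    # external forcing

def step(x, v, force):
    # One explicit Euler step of m*a + c*v + k*x = force.
    a = (force - c * v - k * x) / m
    return x + dt * v, v + dt * a

x_full = v_full = x_lin = v_lin = x_cor = v_cor = 0.0
for i in range(n):
    pseudo = -alpha * x_full ** 3                           # pseudoforce (NN target)
    x_full, v_full = step(x_full, v_full, f[i] + pseudo)    # true nonlinear response
    x_lin,  v_lin  = step(x_lin,  v_lin,  f[i])             # nominal linear part
    x_cor,  v_cor  = step(x_cor,  v_cor,  pseudo)           # corrective part

# By linearity of the nominal system, the two parts sum to the full response.
print("decomposition error:", abs(x_full - (x_lin + x_cor)))
```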
- North America > United States > Colorado > Boulder County > Boulder (0.14)
- North America > United States > Pennsylvania > Lancaster County > Lancaster (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > District of Columbia > Washington (0.04)
On transfer learning of neural networks using bi-fidelity data for uncertainty propagation
De, Subhayan, Britton, Jolene, Reynolds, Matthew, Skinner, Ryan, Jansen, Kenneth, Doostan, Alireza
Due to their high degree of expressiveness, neural networks have recently been used as surrogate models for mapping inputs of an engineering system to outputs of interest. Once trained, neural networks are computationally inexpensive to evaluate and remove the need for repeated evaluations of computationally expensive models in uncertainty quantification applications. However, given the highly parameterized construction of neural networks, especially deep neural networks, accurate training often requires large amounts of simulation data that may not be available in the case of computationally expensive systems. In this paper, to alleviate this issue for uncertainty propagation, we explore the application of transfer learning techniques using training data generated from both high- and low-fidelity models. We explore two strategies for coupling these two datasets during the training procedure, namely, the standard transfer learning and the bi-fidelity weighted learning. In the former approach, a neural network model mapping the inputs to the outputs of interest is trained based on the low-fidelity data. The high-fidelity data is then used to adapt the parameters of the upper layer(s) of the low-fidelity network, or train a simpler neural network to map the output of the low-fidelity network to that of the high-fidelity model. In the latter approach, the entire low-fidelity network parameters are updated using data generated via a Gaussian process model trained with a small high-fidelity dataset. The parameter updates are performed via a variant of stochastic gradient descent with learning rates given by the Gaussian process model. Using three numerical examples, we illustrate the utility of these bi-fidelity transfer learning methods where we focus on accuracy improvement achieved by transfer learning over standard training approaches.
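A minimal sketch of the standard transfer learning strategy described above: fit all parameters to a large low-fidelity dataset, then freeze the lower layers and fine-tune only the output layer on a small high-fidelity dataset. The network sizes, toy data, and `train` helper are assumptions, and the Gaussian-process-weighted variant is not shown.

```python
# Minimal sketch of standard bi-fidelity transfer learning (illustrative only).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))

def train(model, params, x, y, epochs, lr=1e-3):
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        torch.mean((model(x) - y) ** 2).backward()
        opt.step()

# Stage 1: fit all parameters to a large low-fidelity dataset.
x_lf = torch.rand(2000, 2)
y_lf = torch.sin(x_lf.sum(dim=1, keepdim=True))            # cheap surrogate output
train(net, net.parameters(), x_lf, y_lf, epochs=2000)

# Stage 2: freeze the lower layers and adapt only the output layer using the
# small high-fidelity dataset.
for p in net[:-1].parameters():
    p.requires_grad_(False)
x_hf = torch.rand(30, 2)
y_hf = torch.sin(x_hf.sum(dim=1, keepdim=True)) + 0.1 * x_hf.prod(dim=1, keepdim=True)
train(net, net[-1].parameters(), x_hf, y_hf, epochs=1000)
```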
- North America > United States > Colorado > Boulder County > Boulder (0.14)
- North America > United States > California > Riverside County > Riverside (0.14)
- North America > United States > Texas > Travis County > Austin (0.04)
- Government > Regional Government > North America Government > United States Government (0.46)
- Energy > Energy Storage (0.46)